3 - Good Behavior ~> Rationality [ID:21844]

To find a way of talking about doing the right thing as an agent, we have to find a performance measure.

So how do we know if our Roomba is doing the right thing?

Well, for example, we want our apartment free of dirt or something.

So we need some kind of way of talking about how well our Roomba is doing.

For example, other performance measures for a vacuum cleaner could be: award one point per square cleaned up in time T, or award one point per clean square per time step minus one per move.

Or we could add a penalty if there are more than so-and-so many dirty squares.
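To make these measures concrete, here is a small sketch in Python. The function names and the list-of-strings world representation are my own assumptions, not from the lecture:

```python
# Performance measures for a toy vacuum world. A "history" is a list of
# world states over time; each state lists each square as "clean" or "dirty".

def points_per_clean_square(history):
    """Award one point per clean square per time step."""
    return sum(state.count("clean") for state in history)

def clean_minus_moves(history, moves):
    """One point per clean square per time step, minus one per move."""
    return points_per_clean_square(history) - moves

def with_dirt_penalty(history, moves, max_dirty=2, penalty=10):
    """Additionally penalize each step with more than max_dirty dirty squares."""
    score = clean_minus_moves(history, moves)
    for state in history:
        if state.count("dirty") > max_dirty:
            score -= penalty
    return score

# Two time steps over a three-square apartment, one move taken:
history = [["dirty", "clean", "clean"], ["clean", "clean", "clean"]]
print(clean_minus_moves(history, moves=1))  # 2 + 3 - 1 = 4
```

The point is only that each measure is a different function over the same histories, so the same behavior can score well under one measure and badly under another.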

And given such a performance measure, an agent is called rational if it chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date.

So if we have an agent and we know that it has a certain percept history, for example, we throw the Roomba into our apartment and every square is dirty, then the rational thing for the Roomba to do would be to start cleaning up, for example, cleaning up where it is.

It could also move around first, but if we go with the second performance measure above, for example, that would be penalized, because moving without any necessity to do so doesn't really help.
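The definition of rationality above can be sketched directly as expected-value maximization. The toy action set and numbers here are my own assumptions for illustration:

```python
# A rational agent picks the action that maximizes the expected value of
# the performance measure, given its percept sequence so far.

def expected_value(outcomes):
    """outcomes: list of (probability, score) pairs."""
    return sum(p * score for p, score in outcomes)

def rational_action(actions):
    """actions: dict mapping action name -> list of (probability, score)."""
    return max(actions, key=lambda a: expected_value(actions[a]))

# Percept: the current square is dirty. Sucking surely cleans it (+1);
# moving first costs a point under the move-penalty measure.
actions = {
    "suck": [(1.0, 1.0)],
    "move": [(1.0, -1.0)],
}
print(rational_action(actions))  # suck
```

Note that the choice depends on the performance measure: under a measure without a move penalty, moving first would not score worse.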

As we talked about before, a rational agent doesn't need to be perfect. It just needs to maximize expected value, i.e. do the best it can.

For example, it doesn't need to predict very unlikely events.

If you have a vacuum cleaner robot, it is possible that some dust gets blown in through the window, so a square by the window that is currently clean could become dirty again in the next step.

That is probably very unlikely.

It doesn't necessarily need to predict that to be able to act rationally.
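This can be seen in the arithmetic of expectations. With made-up numbers (the probabilities and scores here are purely illustrative), a very unlikely event barely shifts the expected performance, so ignoring it rarely changes which action is rational:

```python
# A tiny-probability event has a tiny effect on the expected value.
p_dust = 0.001           # chance dust blows in through the window
score_if_no_dust = 10.0  # the window square stays clean
score_if_dust = 9.0      # one point lost if the window square re-dirties

ev = (1 - p_dust) * score_if_no_dust + p_dust * score_if_dust
print(round(ev, 3))  # 9.999, almost identical to ignoring the event
```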

Also, an agent doesn't necessarily know everything that it might want to know via its percepts.

The Roomba, for example, can only tell whether there is dirt directly below it or not.

It doesn't know that, for example, my shoes are dirty and I just entered the room, so it should clean up near the door.

Its sensors don't tell it that.

But again, we're trying to define rationality with respect to the information and performance measures that the agent has.

And the outcomes may not be as expected. For example, if its vacuum has been manipulated and set to reverse, then it makes a square dirtier by trying to clean it up.

But if the agent can't actually tell that this is the result of it trying to suck up dirt, that doesn't necessarily mean it isn't rational. It is still trying to do the best it can.

Makes sense, roughly?

Some nodding.

Good.

Yeah.

An agent is called autonomous if it does not rely on the prior knowledge of the designer.

So if we think back to the industrial arm mentioned earlier, an autonomous industrial arm would be one that senses whether there is something to pick up in one place and then puts it in another place.

Part of a chapter:
Rational Agents: a Unifying Framework for Artificial Intelligence

Presenter

Jonas Betzendahl

Accessible via

Open Access

Duration

00:09:59 min

Recording date

2020-10-26

Uploaded on

2020-10-26 11:26:57

Language

en-US

Explanation of rationality and PEAS.
